
    Adaptive estimation of linear functionals in the convolution model and applications

    We consider the model $Z_i = X_i + \varepsilon_i$, for i.i.d. $X_i$'s and $\varepsilon_i$'s and independent sequences $(X_i)_{i\in\mathbb{N}}$ and $(\varepsilon_i)_{i\in\mathbb{N}}$. The density $f_\varepsilon$ of $\varepsilon_1$ is assumed to be known, whereas the density of $X_1$, denoted by $g$, is unknown. Our aim is to estimate linear functionals of $g$, $\langle\psi, g\rangle$, for a known function $\psi$. We propose a general estimator of $\langle\psi, g\rangle$ and study the rate of convergence of its quadratic risk as a function of the smoothness of $g$, $f_\varepsilon$ and $\psi$. Different contexts with dependent data, such as stochastic volatility and AutoRegressive Conditionally Heteroskedastic models, are also considered. An estimator which is adaptive to the smoothness of the unknown $g$ is then proposed, following a method studied by Laurent et al. (Preprint (2006)) in the Gaussian white noise model. We give upper bounds and asymptotic lower bounds on the quadratic risk of this estimator. The results are applied to adaptive pointwise deconvolution, in which context losses in the adaptive rates are shown to be optimal in the minimax sense. They are also applied in the context of the stochastic volatility model. Comment: Published at http://dx.doi.org/10.3150/08-BEJ146 in the Bernoulli (http://isi.cbs.nl/bernoulli/) by the International Statistical Institute/Bernoulli Society (http://isi.cbs.nl/BS/bshome.htm).
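    For orientation, the generic Fourier/Parseval device behind this kind of estimator can be written explicitly (a sketch under the model above, with an illustrative cutoff convention, not necessarily the paper's exact estimator). Writing $u^*(t)=\int e^{itx}u(x)\,dx$ for the Fourier transform, independence of $X_1$ and $\varepsilon_1$ gives $f_Z^*=g^*f_\varepsilon^*$, hence

    \[
    \langle\psi,g\rangle=\frac{1}{2\pi}\int\psi^*(-t)\,g^*(t)\,dt
    =\frac{1}{2\pi}\int\psi^*(-t)\,\frac{f_Z^*(t)}{f_\varepsilon^*(t)}\,dt,
    \qquad
    \widehat{\langle\psi,g\rangle}_m=\frac{1}{2\pi}\int_{-\pi m}^{\pi m}\psi^*(-t)\,\frac{n^{-1}\sum_{j=1}^n e^{itZ_j}}{f_\varepsilon^*(t)}\,dt,
    \]

    where the empirical characteristic function of the $Z_j$'s replaces $f_Z^*$ and the spectral cutoff $m$ is the tuning parameter that the adaptive procedure must select.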

    State estimation in quantum homodyne tomography with noisy data

    In the framework of noisy quantum homodyne tomography with efficiency parameter $0 < \eta \leq 1$, we propose two estimators of a quantum state whose density matrix elements $\rho_{m,n}$ decrease like $e^{-B(m+n)^{r/2}}$, for fixed known $B>0$ and $0<r\leq 2$. The first procedure estimates the matrix coefficients by a projection method on the pattern functions (which we introduce here for $0<\eta\leq 1/2$); the second procedure is a kernel estimator of the associated Wigner function. We compute the convergence rates of these estimators in $\mathbb{L}_2$ risk.
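    For orientation, a standard way to formalise noisy homodyne data with efficiency $\eta$ (stated here as an assumption for context, not quoted from the paper) is

    \[
    (Y_i,\Phi_i), \qquad Y_i=\sqrt{\eta}\,X_i+\sqrt{\tfrac{1-\eta}{2}}\,\xi_i, \qquad \xi_i\sim\mathcal{N}(0,1)\ \text{i.i.d., independent of }(X_i,\Phi_i),
    \]

    where $(X_i,\Phi_i)$ is an ideal homodyne measurement of the state at a phase $\Phi_i$ drawn uniformly on $[0,\pi]$; the two procedures then recover either the coefficients $\rho_{m,n}$ or the associated Wigner function from the noisy pairs $(Y_i,\Phi_i)$.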

    Deconvolution for an atomic distribution: rates of convergence

    Let $X_1,\ldots,X_n$ be i.i.d. copies of a random variable $X=Y+Z$, where $X_i=Y_i+Z_i$, and $Y_i$ and $Z_i$ are independent and have the same distributions as $Y$ and $Z$, respectively. Assume that the random variables $Y_i$ are unobservable and that $Y=AV$, where $A$ and $V$ are independent, $A$ has a Bernoulli distribution with probability of success equal to $1-p$, and $V$ has a distribution function $F$ with density $f$. Let the random variable $Z$ have a known distribution with density $k$. Based on a sample $X_1,\ldots,X_n$, we consider the problem of nonparametric estimation of the density $f$ and the probability $p$. Our estimators of $f$ and $p$ are constructed via Fourier inversion and kernel smoothing. We derive their convergence rates over suitable functional classes. By establishing lower bounds for the estimation of $f$ and $p$ in a number of cases, we show that our estimators are rate-optimal in these cases. Comment: 27 pages.
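    The Fourier-inversion route is transparent in this model (a derivation from the setup above; the paper's estimators may differ in their exact tuning). Writing $\phi_W(t)=E\,e^{itW}$ for characteristic functions, independence of $A$ and $V$ and of $Y$ and $Z$ gives

    \[
    \phi_Y(t)=p+(1-p)\,\phi_f(t), \qquad \phi_X(t)=\phi_Y(t)\,\phi_k(t)
    \quad\Longrightarrow\quad
    \phi_f(t)=\frac{\phi_X(t)/\phi_k(t)-p}{1-p}.
    \]

    Since $\phi_f(t)\to 0$ as $|t|\to\infty$ (Riemann-Lebesgue), the ratio $\phi_X(t)/\phi_k(t)$ tends to $p$ at high frequencies, which suggests estimating $p$ from the tail of the empirical characteristic function and recovering $f$ by kernel-smoothed Fourier inversion of the display above.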

    Rank penalized estimation of a quantum system

    We introduce a new method to reconstruct the density matrix $\rho$ of a system of $n$ qubits and estimate its rank $d$ from data obtained by quantum state tomography measurements repeated $m$ times. The procedure consists in minimizing the risk of a linear estimator $\hat{\rho}$ of $\rho$ penalized by a given rank (from 1 to $2^n$), where $\hat{\rho}$ is previously obtained by the moment method. We obtain simultaneously an estimator of the rank and the resulting density matrix associated with this rank. We establish an upper bound for the error of the penalized estimator, evaluated with the Frobenius norm, which is of order $dn(4/3)^n/m$, and consistency for the estimator of the rank. The proposed methodology is computationally efficient and is illustrated with some example states and real experimental data sets.
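    A minimal numpy sketch of the generic rank-penalization step (assuming a preliminary estimator rho_hat is already available, e.g. from the moment method; the penalty constant nu and the omission of a projection onto physical density matrices are simplifications, not the paper's exact procedure):

```python
import numpy as np

def rank_penalized(rho_hat, nu):
    """Pick the rank k minimizing ||rho_hat - best rank-k approximation||_F^2 + nu * k
    and return k together with the truncated estimator (generic sketch)."""
    vals, vecs = np.linalg.eigh(rho_hat)              # Hermitian eigendecomposition
    order = np.argsort(np.abs(vals))[::-1]            # largest-magnitude eigenvalues first
    vals, vecs = vals[order], vecs[:, order]
    d = vals.size
    # Squared Frobenius residual of the best rank-k approximation, for k = 0, ..., d.
    total = np.sum(vals**2)
    tail = np.concatenate(([total], total - np.cumsum(vals**2)))
    k = int(np.argmin(tail + nu * np.arange(d + 1)))  # penalized rank selection
    rho_k = (vecs[:, :k] * vals[:k]) @ vecs[:, :k].conj().T
    return k, rho_k

# Toy usage: a noisy rank-1 two-qubit state (hypothetical noise level and penalty).
rng = np.random.default_rng(0)
psi = rng.normal(size=4) + 1j * rng.normal(size=4)
psi /= np.linalg.norm(psi)
noise = rng.normal(scale=0.02, size=(4, 4)) + 1j * rng.normal(scale=0.02, size=(4, 4))
rho_hat = np.outer(psi, psi.conj()) + (noise + noise.conj().T) / 2
k, rho_k = rank_penalized(rho_hat, nu=0.01)
print("selected rank:", k)
```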

    Adaptive variable selection in nonparametric sparse additive models

    We consider the problem of recovery of an unknown multivariate signal $f$ observed in a $d$-dimensional Gaussian white noise model of intensity $\varepsilon$. We assume that $f$ belongs to a class of smooth functions in $L_2([0,1]^d)$ and has an additive sparse structure determined by the parameter $s$, the number of non-zero univariate components contributing to $f$. We are interested in the case when $d = d_\varepsilon \to \infty$ as $\varepsilon \to 0$ and the parameter $s$ stays “small” relative to $d$. With these assumptions, the recovery problem at hand becomes that of determining which sparse additive components are non-zero. Attempting to reconstruct most, but not all, non-zero components of $f$, we arrive at the problem of almost full variable selection in high-dimensional regression. For two different choices of a class of smooth functions, we establish conditions under which almost full variable selection is possible, and provide a procedure that achieves this goal. Our procedure is the best possible (in the asymptotically minimax sense) for selecting most non-zero components of $f$. Moreover, it is adaptive in the parameter $s$. In addition, we complement the findings of [17] by obtaining an adaptive exact selector for the class of infinitely smooth functions. Our theoretical results are illustrated with numerical experiments.
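    A toy sequence-space illustration of the underlying selection principle (dimensions, signal strengths and the threshold below are hypothetical choices, not the paper's calibrated procedure): estimate each component's squared norm from its noisy coefficients and keep the components whose estimate exceeds a noise-calibrated threshold.

```python
import numpy as np

rng = np.random.default_rng(1)
d, s, n_coef, eps = 200, 10, 64, 0.05      # dimension, sparsity, coefficients kept, noise level

# Noisy coefficients of each additive component: y[j, k] = theta[j, k] + eps * xi[j, k].
theta = np.zeros((d, n_coef))
active = np.sort(rng.choice(d, size=s, replace=False))
amps = rng.uniform(0.2, 0.8, size=s)                        # component strengths, some weak
theta[active] = amps[:, None] / (1.0 + np.arange(n_coef))   # smoothly decaying coefficients
y = theta + eps * rng.standard_normal((d, n_coef))

# Unbiased estimate of each component's squared norm (restricted to n_coef coefficients),
# thresholded at a level of the order of the noise fluctuations (illustrative calibration).
norm2_hat = np.sum(y**2 - eps**2, axis=1)
threshold = 2.0 * eps**2 * np.sqrt(2.0 * n_coef * np.log(d))
selected = np.flatnonzero(norm2_hat > threshold)

print("true non-zero components:", active)
print("selected components     :", selected)
```

    Components whose norm falls below the noise-calibrated threshold are typically missed, which is exactly the “most, but not all” regime the abstract refers to.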

    Open TURNS: An industrial software for uncertainty quantification in simulation

    The need to assess robust performance of complex systems and to meet tighter regulatory processes (security, safety, environmental control, health impacts, etc.) has led to the emergence of a new industrial simulation challenge: taking uncertainties into account when dealing with complex numerical simulation frameworks. A generic methodology has therefore emerged from the joint effort of several industrial companies and academic institutions. EDF R&D, Airbus Group and Phimeca Engineering started a collaboration at the beginning of 2005, joined by IMACS in 2014, to develop an Open Source software platform dedicated to uncertainty propagation by probabilistic methods, named OpenTURNS for Open source Treatment of Uncertainty, Risk 'N Statistics. OpenTURNS addresses the specific industrial challenges attached to uncertainties, which are transparency, genericity, modularity and multi-accessibility. This paper focuses on OpenTURNS and presents its main features: OpenTURNS is an open source software under the LGPL license, which presents itself as a C++ library and a Python TUI, and which works under Linux and Windows environments. All the methodological tools are described in the different sections of this paper: uncertainty quantification, uncertainty propagation, sensitivity analysis and metamodeling. A section also explains the generic wrapper mechanism used to link OpenTURNS to any external code. The paper illustrates the methodological tools as much as possible on an educational example that simulates the height of a river and compares it to the height of a dyke that protects industrial facilities. Finally, it gives an overview of the main developments planned for the next few years.
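    To make the workflow concrete, here is a minimal sketch with the OpenTURNS Python interface (the flood-type formula, distributions and numerical values below are simplified placeholders, not the exact educational example of the paper):

```python
import math
import openturns as ot

# Simplified river-height model: flow rate q and friction coefficient ks in,
# water height (m) out; the width, slope and formula are illustrative only.
def river_height(x):
    q, ks = x
    return [(q / (ks * 300.0 * math.sqrt(5e-4))) ** 0.6]

g = ot.PythonFunction(2, 1, river_height)

# Probabilistic input model (illustrative choices), then uncertainty propagation
# by plain Monte Carlo sampling of the output random vector.
inputs = ot.ComposedDistribution([ot.Normal(1000.0, 200.0), ot.Normal(30.0, 5.0)])
output = ot.CompositeRandomVector(g, ot.RandomVector(inputs))
sample = output.getSample(10000)

dyke_height = 3.0  # hypothetical dyke crest height (m)
p_overflow = 1.0 - sample.computeEmpiricalCDF([dyke_height])
print("mean river height (m):", sample.computeMean()[0])
print("P(height > dyke)     :", p_overflow)
```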

    Statistical analysis of compressive low rank tomography with random measurements

    We consider the statistical problem of 'compressive' estimation of low rank states ($r \ll d$) with random basis measurements, where $r$ and $d$ are the rank and dimension of the state, respectively. We investigate whether, for a fixed sample size $N$, the estimation error associated with a 'compressive' measurement setup is 'close' to that of the setting where a large number of bases are measured. We generalise and extend previous results, and show that the mean square error (MSE) associated with the Frobenius norm attains the optimal rate $rd/N$ with only $O(r\log d)$ random basis measurements for all states. An important tool in the analysis is the concentration of the Fisher information matrix (FIM). We demonstrate that although a concentration of the MSE follows from a concentration of the FIM for most states, the FIM fails to concentrate for states with eigenvalues close to zero. We analyse this phenomenon in the case of a single qubit and demonstrate a concentration of the MSE about its optimal value despite a lack of concentration of the FIM for states close to the boundary of the Bloch sphere. We also consider the estimation error in terms of a different metric, the quantum infidelity. We show that a concentration in the mean infidelity (MINF) does not exist uniformly over all states, highlighting the importance of the choice of loss function. Specifically, we show that for states that are nearly pure, the MINF scales as $1/\sqrt{N}$ but the constant converges to zero as the number of settings is increased. This demonstrates a lack of 'compressive' recovery for nearly pure states in this metric.
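    The two loss functions discussed here can be compared directly on toy inputs (a small illustration with arbitrary states, assuming numpy and scipy; not the paper's simulation setup):

```python
import numpy as np
from scipy.linalg import sqrtm

def frobenius_error(rho, sigma):
    """Squared Frobenius-norm error, the loss behind the rd/N rate."""
    return float(np.linalg.norm(rho - sigma, ord="fro") ** 2)

def infidelity(rho, sigma):
    """Quantum infidelity 1 - F, with F = (Tr sqrt(sqrt(rho) sigma sqrt(rho)))^2."""
    s = sqrtm(rho)
    fid = np.real(np.trace(sqrtm(s @ sigma @ s))) ** 2
    return float(1.0 - fid)

# A nearly pure qubit state versus a slightly perturbed estimate: the two metrics
# weight the same perturbation very differently near the boundary of the Bloch sphere.
rho = np.diag([0.99, 0.01]).astype(complex)
sigma = np.diag([0.97, 0.03]).astype(complex)
print("Frobenius^2:", frobenius_error(rho, sigma))
print("infidelity :", infidelity(rho, sigma))
```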

    Signal detection for inverse problems in a multidimensional framework

    This paper is devoted to multi-dimensional inverse problems. In this setting, we address a goodness-of-fit testing problem. We investigate the separation rates associated with different kinds of smoothness assumptions and different degrees of ill-posedness.